
A Novel Data-Dependent Learning Paradigm for Large Hypothesis Classes

Pour, Alireza F., Ben-David, Shai

arXiv.org Machine Learning

We address the general task of learning with a set of candidate models that is too large to admit uniform convergence of empirical estimates to true losses. While the common approach to such challenges is SRM-based (or regularization-based) learning algorithms, we propose a novel learning paradigm that relies on a stronger incorporation of empirical data and requires fewer algorithmic decisions to be based on prior assumptions. We analyze the generalization capabilities of our approach and demonstrate its merits under several common learning assumptions, including similarity of close points, clustering of the domain into highly label-homogeneous regions, Lipschitzness of the labeling rule, and contrastive learning assumptions. Our approach allows utilizing such assumptions without the need to know their true parameters a priori.
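For context, the sketch below illustrates the SRM baseline that this abstract contrasts with, not the paper's own paradigm: over a nested sequence of hypothesis classes, SRM picks the hypothesis minimizing empirical risk plus a complexity penalty fixed by prior assumptions. All names here (fit_best_in_class, complexity_penalty) and the penalty's constants are illustrative assumptions.

import numpy as np

def complexity_penalty(class_index, n_samples, delta=0.05):
    # Generic penalty growing with class capacity and shrinking with sample
    # size, in the spirit of standard SRM bounds; constants are placeholders.
    return np.sqrt((class_index + 1 + np.log(1.0 / delta)) / n_samples)

def srm_select(hypothesis_classes, fit_best_in_class, X, y):
    """hypothesis_classes: list of class descriptions ordered by capacity.
    fit_best_in_class: callable returning (hypothesis, empirical_risk)."""
    best, best_score = None, float("inf")
    for k, H in enumerate(hypothesis_classes):
        h, emp_risk = fit_best_in_class(H, X, y)
        score = emp_risk + complexity_penalty(k, len(y))
        if score < best_score:
            best, best_score = h, score
    return best

The key point the abstract makes is that the penalty schedule above must be chosen a priori; the proposed paradigm instead lets empirical data drive more of those decisions.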


Smooth Interactive Submodular Set Cover

Neural Information Processing Systems

Interactive submodular set cover is an interactive variant of submodular set cover over a hypothesis class of submodular functions, where the goal is to satisfy all sufficiently plausible submodular functions to a target threshold using as few (cost-weighted) actions as possible. It models settings where there is uncertainty regarding which submodular function to optimize. In this paper, we propose a new extension, which we call smooth interactive submodular set cover, that allows the target threshold to vary depending on the plausibility of each hypothesis. We present the first algorithm for this more general setting with theoretical guarantees on optimality. We further show how to extend our approach to deal with real-valued functions, which yields new theoretical results for real-valued submodular set cover for both the interactive and non-interactive settings.
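As background, the classical greedy algorithm for (non-interactive) submodular set cover, the baseline problem this paper generalizes, repeatedly takes the action with the best marginal-gain-per-cost ratio until the function reaches the target threshold Q. The sketch below is that standard greedy routine under assumed inputs, not the paper's interactive algorithm.

def greedy_set_cover(ground_set, F, cost, Q):
    """F: monotone submodular set function, cost: per-element cost, Q: target."""
    S = set()
    while F(S) < Q:
        candidates = ground_set - S
        if not candidates:
            break
        def ratio(e):
            # Marginal gain truncated at Q, as in the standard reduction.
            return (min(F(S | {e}), Q) - min(F(S), Q)) / cost[e]
        e_best = max(candidates, key=ratio)
        if ratio(e_best) <= 0:
            break  # no remaining action makes progress toward Q
        S.add(e_best)
    return S

# Example: a coverage function over sets of items.
universe_items = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}}

def F(S):
    covered = set()
    for e in S:
        covered |= universe_items[e]
    return len(covered)

solution = greedy_set_cover(set(universe_items), F, {"a": 1, "b": 1, "c": 2}, Q=5)

In the interactive setting the true submodular function is unknown, so the algorithm must instead hedge across all sufficiently plausible hypotheses; the smooth variant additionally lets the threshold Q depend on each hypothesis's plausibility.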


A Proof of Theorem

Neural Information Processing Systems

Eq. 7 implies that the gradient operator is … . Below we supplement Lemma A.1, which is used to prove Theorem 1: … is a $(j-1)$ Jacobian matrix; the second equality is due to the induction hypothesis, and the third equality is an application of the chain rule. By induction, this concludes the proof. For the sake of clarity, we first introduce a few notions from algebra and real analysis. Definition B.2 (Differential Operator): suppose a compact set … . Definition B.4 (Fourier Transform): given a real-valued function … . Definition B.5 (Convolution): given two real-valued functions … . Before proving Theorem 2, we enumerate the following results as our key mathematical tools. First of all, we note the following well-known result without proof. Lemma B.2 (Stone-Weierstrass Theorem): suppose $A \subseteq C(X, \mathbb{R})$ is a unital sub-algebra which separates points in $X$; then $A$ is dense in $C(X, \mathbb{R})$ in the uniform norm.
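The definition bodies above are truncated in this listing. For reference, the standard forms are given below; this is a sketch assuming the usual textbook conventions, which may differ from the paper's (e.g., in the Fourier normalization).

% Standard statements supplied for reference; conventions may differ
% from the paper's own (truncated) definitions.
\[
  \widehat{f}(\xi) \;=\; \int_{\mathbb{R}^d} f(x)\, e^{-2\pi i \langle x,\, \xi \rangle}\, dx
  \qquad \text{(Fourier transform of } f \in L^1(\mathbb{R}^d)\text{)}
\]
\[
  (f * g)(x) \;=\; \int_{\mathbb{R}^d} f(y)\, g(x - y)\, dy
  \qquad \text{(convolution of real-valued } f, g\text{)}
\]
Stone--Weierstrass: if $X$ is compact Hausdorff and $A \subseteq C(X, \mathbb{R})$ is a unital sub-algebra that separates points of $X$, then $A$ is dense in $C(X, \mathbb{R})$ in the uniform norm.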


Disentangling Fine-Tuning from Pre-Training in Visual Captioning with Hybrid Markov Logic

Shah, Monika, Sarkhel, Somdeb, Venugopal, Deepak

arXiv.org Artificial Intelligence

Multimodal systems have highly complex processing pipelines and are pretrained over large datasets before being fine-tuned for specific tasks such as visual captioning. However, it is hard to disentangle what the model learns during fine-tuning from what it already knows from pretraining. In this work, we learn a probabilistic model using Hybrid Markov Logic Networks (HMLNs) over the training examples by relating symbolic knowledge (extracted from the caption) with visual features (extracted from the image). For a generated caption, we quantify the influence of training examples based on the HMLN distribution using probabilistic inference. We evaluate two types of inference procedures on the MSCOCO dataset for different types of captioning models. Our results show that for BLIP2 (a model that uses an LLM), fine-tuning may have a smaller influence on the knowledge the model has acquired, since it may already have more general knowledge for visual captioning than models that do not use an LLM.
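As a rough illustration of this kind of influence computation (not the paper's HMLN construction), the sketch below scores a training example's influence on a generated caption as the change in the caption's unnormalized log-density under a simple log-linear model over joint symbolic/visual features when that example is ablated. All names and features are hypothetical, and raw co-occurrence counts stand in for learned formula weights.

from collections import Counter

def feature_counts(example):
    # Hypothetical joint features: (caption symbol, image attribute) pairs.
    return Counter((s, v) for s in example["symbols"] for v in example["visual"])

def log_score(query_feats, weights):
    # Unnormalized log-density of the query under weighted features.
    return sum(weights.get(f, 0.0) * n for f, n in query_feats.items())

def influence(train_set, i, query):
    """Change in the query caption's log-score when example i is removed."""
    full, ablated = Counter(), Counter()
    for j, ex in enumerate(train_set):
        c = feature_counts(ex)
        full.update(c)
        if j != i:
            ablated.update(c)
    q = feature_counts(query)
    return log_score(q, full) - log_score(q, ablated)

A training example with high influence shares many weighted symbolic/visual features with the generated caption, which is the intuition behind attributing a caption to pretraining versus fine-tuning data.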